AI teachers and cybernetics - what could the world look like in 2050?

BBC News

AI teachers and cybernetics - what could the world look like in 2050? The last 25 years have seen some mind-bending technological changes. At the start of the century, most computers connected to the internet with noisy dial-up connections, Netflix was an online DVD rental company, and the vast majority of people hadn't even heard of a smartphone. Fast forward two and a half decades, and innovations in AI, robotics and much else besides are emerging at an incredible rate. So we decided to ask experts what the next 25 years could bring.


Revealed: What the most stereotypical MEN around the world look like, according to AI - so, do you think they're accurate?

Daily Mail - Science & tech

If you were asked to visualise a stereotypical British man, what would you think of? According to AI, the answer is an overweight man wearing a football shirt. Instagram account @reimagineuk asked AI to create videos of the most stereotypical men around the world - with hilarious results. While the British man looks casual in his football shirt, men from other countries are depicted with fancier outfits. The stereotypical man from Portugal sports a white shirt and a waistcoat, while the man from Nigeria can be seen wearing a bright orange suit.


What Would the World Look Like if AI Wasn't Called AI?

#artificialintelligence

The field of AI could have many names. Artificial intelligence is probably the least accurate of them all. When the founding fathers of AI met in 1956 to find a name for the field, the objective they had in mind was creating a machine with human-like intelligence, behavior, and even sentience. However, at that time neither hardware, software, nor data science were mature enough to achieve that goal. They were naive to think AGI was easily attainable.


Shield AI Fundamentals: On Mapping

#artificialintelligence

Written by Vibhav Ganesh, Senior Autonomy Engineer. As is evident in the name, the key component of any autonomous robotic system is its basic autonomy: its loop of perception, cognition, and action, which enables a robot to determine what it should do, when it should do it, and how. In the following paragraphs, I will cover the topic of mapping, an integral part of the perception stage of this loop. In order to effectively operate within a complex and dynamic environment, a robot must be able to represent its surroundings. To do so, it seeks the answer to the question, "What does the world look like?" by creating a digital representation of the world called a map.
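A common digital representation of this kind is a 2D occupancy grid, where each cell stores the probability that its patch of the world is occupied. The sketch below is a minimal illustration of that idea; the cell size, extent, and probability values are illustrative assumptions, not Shield AI's actual implementation.

```python
import numpy as np

class OccupancyGrid:
    """Toy 2D occupancy-grid map: each cell holds the probability
    that the corresponding patch of the world is occupied."""

    def __init__(self, width_m=10.0, height_m=10.0, resolution_m=0.5):
        self.resolution = resolution_m
        # Initialize every cell to 0.5: unknown, equal odds free/occupied.
        self.grid = np.full(
            (int(height_m / resolution_m), int(width_m / resolution_m)),
            0.5,
        )

    def world_to_cell(self, x, y):
        # Convert metric world coordinates to (row, col) grid indices.
        return int(y / self.resolution), int(x / self.resolution)

    def mark_occupied(self, x, y):
        r, c = self.world_to_cell(x, y)
        self.grid[r, c] = 0.9  # high confidence the cell is occupied

    def mark_free(self, x, y):
        r, c = self.world_to_cell(x, y)
        self.grid[r, c] = 0.1  # high confidence the cell is free

grid = OccupancyGrid()
grid.mark_occupied(3.2, 4.7)   # e.g. a wall returned by a range sensor
grid.mark_free(1.0, 1.0)       # open space along the sensor ray
```

In a real system the hard-coded 0.9/0.1 updates would be replaced by a probabilistic sensor model that fuses repeated, noisy measurements over time.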


Generative Models

@machinelearnbot

One of our core aspirations at OpenAI is to develop algorithms and techniques that endow computers with an understanding of our world. It's easy to forget just how much you know about the world: you understand that it is made up of 3D environments, objects that move, collide, interact; people who walk, talk, and think; animals who graze, fly, run, or bark; monitors that display information encoded in language about the weather, who won a basketball game, or what happened in 1970. This tremendous amount of information is out there and to a large extent easily accessible -- either in the physical world of atoms or the digital world of bits. The only tricky part is to develop models and algorithms that can analyze and understand this treasure trove of data. Generative models are one of the most promising approaches towards this goal.


Why Do We Need 120Hz/144Hz Monitors If The Human Eye Can't See Beyond 60Hz?

Forbes - Tech

Human eyes cannot see things beyond 60Hz, so why are 120Hz/144Hz monitors better? The brain, not the eye, does the seeing. The eye transmits information to the brain, but some characteristics of the signal are lost or altered in the process.


This is how the world looks on Facebook's population maps

Engadget

Facebook's Connectivity Lab today released its high-resolution population maps for Malawi, South Africa, Ghana, Haiti and Sri Lanka, with the promise to make more datasets available over the coming months. The population maps are a joint effort between the Facebook Connectivity Lab, Columbia University and the World Bank, though Facebook is interested in the project as part of its effort to launch wireless communication services in rural regions around the globe. Facebook and friends used software to identify buildings in commercially available satellite images, and then estimated population using census data and a few other surveys and programs. Convolutional neural networks powered a model capable of identifying individual buildings in images from across the world. "There has been a lot of work recently on neural networks that can recognize individual buildings with very high accuracy, but these models are finely tuned on the local characteristics of the region where they are trained," the Connectivity Lab's Tobias Tiecke writes. "We found that these models do not perform well at a global scale with realistic amounts of training data."
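The allocation step described above — detect buildings, then spread census totals across them — can be sketched as a simple proportional split. The region names and numbers below are invented for illustration; this is not Facebook's actual pipeline.

```python
def allocate_population(census_total, buildings_per_cell):
    """Spread a region's census total over grid cells in proportion
    to the number of buildings the detector found in each cell."""
    total_buildings = sum(buildings_per_cell.values())
    return {
        cell: census_total * n / total_buildings
        for cell, n in buildings_per_cell.items()
    }

# Toy example: 10 detected buildings across 3 cells, 1,000 residents
# in the census region. Each building receives an equal share.
estimate = allocate_population(
    1000, {"cell_a": 5, "cell_b": 3, "cell_c": 2}
)
```

The real maps refine this with survey data and building-size cues, but the core idea is the same: population follows detected structures rather than being smeared uniformly over administrative boundaries.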


Video Games Are So Realistic That They Can Teach AI What the World Looks Like

#artificialintelligence

Thanks to the modern gaming industry, we can now spend our evenings wandering around photorealistic game worlds, like the post-apocalyptic Boston of Fallout 4 or Grand Theft Auto V's Los Santos, instead of doing things like "seeing people" and "engaging in human interaction of any kind." Games these days are so realistic, in fact, that artificial intelligence researchers are using them to teach computers how to recognize objects in real life. Not only that, but commercial video games could kick artificial intelligence research into high gear by dramatically lessening the time and money required to train AI. "If you go back to the original Doom, the walls all look exactly the same and it's very easy to predict what a wall looks like, given that data," said Mark Schmidt, a computer science professor at the University of British Columbia (UBC). "But if you go into the real world, where every wall looks different, it might not work anymore." Schmidt works with machine learning, a technique that allows computers to "train" on a large set of labelled data--photographs of streets, for example--so that when let loose in the real world, they can recognize, or "predict," what they're looking at.
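The "train on labelled data, then predict on new examples" idea Schmidt describes can be shown with a tiny nearest-centroid classifier; the 2-D features and "wall"/"floor" labels below stand in for image features and are purely a toy sketch, not the actual research pipeline.

```python
import numpy as np

def fit_centroids(features, labels):
    """Compute one mean feature vector ("centroid") per class
    from the labelled training set."""
    return {
        label: features[labels == label].mean(axis=0)
        for label in np.unique(labels)
    }

def predict(centroids, x):
    """Assign x to the class whose centroid is nearest."""
    return min(centroids, key=lambda label: np.linalg.norm(x - centroids[label]))

# Toy training set: 2-D feature vectors labelled "wall" or "floor"
# (standing in for features extracted from game-engine imagery).
X = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]])
y = np.array(["wall", "wall", "floor", "floor"])

centroids = fit_centroids(X, y)
label = predict(centroids, np.array([0.85, 0.15]))  # a wall-like example
```

Real systems use deep networks rather than centroids, but the appeal of game data is the same either way: the engine hands you perfect labels for free, whereas labelling real-world photographs is slow and expensive.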